PLLay: Efficient Topological Layer based on Persistent Landscapes
Kwangho Kim, Jisu Kim, Joon Sik Kim, Frédéric Chazal, Larry Wasserman
We propose PLLay, a novel topological layer for general deep learning models based on persistence landscapes, in which we can efficiently exploit the underlying topological features of the input data structure. In this work, we show differentiability with respect to layer inputs, for a general persistent homology with arbitrary filtration. Thus, our proposed layer can be placed anywhere in the network and feed critical information on the topological features of input data into subsequent layers to improve the learnability of the networks toward a given task. A task-optimal structure of PLLay is learned during training via backpropagation, without requiring any input featurization or data preprocessing. We provide a novel adaptation for the DTM function-based filtration, and show that the proposed layer is robust against noise and outliers through a stability analysis. We demonstrate the effectiveness of our approach by classification experiments on various datasets.
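For intuition, here is a minimal NumPy sketch of the persistence-landscape construction that PLLay builds on: given a persistence diagram {(b_i, d_i)}, the k-th landscape lambda_k(t) is the k-th largest tent value max(0, min(t - b_i, d_i - t)). The function name, grid discretization, and k_max cutoff below are illustrative assumptions, not the paper's actual implementation.

import numpy as np

def persistence_landscape(diagram, t_grid, k_max=3):
    """Evaluate the first k_max landscape functions on a grid.

    diagram : (n, 2) array of (birth, death) pairs
    t_grid  : (m,) array of evaluation points
    returns : (k_max, m) array; row k samples lambda_{k+1} on t_grid
    """
    births = diagram[:, 0][:, None]  # shape (n, 1)
    deaths = diagram[:, 1][:, None]  # shape (n, 1)
    # Tent function of each diagram point, evaluated at every grid point.
    tents = np.maximum(0.0, np.minimum(t_grid[None, :] - births,
                                       deaths - t_grid[None, :]))
    # Sort each column in descending order: row k holds the k-th largest tent.
    tents = -np.sort(-tents, axis=0)
    # Pad with zero landscapes if the diagram has fewer than k_max points.
    if tents.shape[0] < k_max:
        tents = np.vstack([tents,
                           np.zeros((k_max - tents.shape[0], t_grid.size))])
    return tents[:k_max]

# Example: a diagram with two features, evaluated on [0, 1].
diagram = np.array([[0.1, 0.6], [0.3, 0.9]])
t_grid = np.linspace(0.0, 1.0, 101)
print(persistence_landscape(diagram, t_grid, k_max=2).shape)  # (2, 101)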
Review for NeurIPS paper: PLLay: Efficient Topological Layer based on Persistent Landscapes
Weaknesses: While I feel favourably about this paper, there are still some issues that prevent me from fully endorsing it. My primary reservation is that the experimental and practical benefits are not clearly demonstrated, and that the experimental setup and presentation are somewhat shallow:

- When depicting accuracies in Figure 3, please show the standard deviations alongside the accuracies (as additional error bars, for example) in order to make the methods comparable. It seems to me that the gains achieved on MNIST are not necessarily significantly better than those of other methods, but I could be wrong here. (I do appreciate the setup in terms of noise or "corruption" levels, though; I think more of these kinds of experiments are always helpful in ML!)
- In addition to a better way of reporting the results, I would also like to see a more detailed comparison to baseline methods. I understand that the natural comparison partners are *other* topological layers. However, alongside these neural-network baselines, I would also suggest looking at kernels for persistence diagrams, or metrics between them (though I understand that the latter type of function might not be efficiently computable). There are many kernel functions that could be used equally well here:
  - Carrière et al.: Sliced Wasserstein Kernel for Persistence Diagrams, ICML 2017
  - Kusano et al.: Persistence weighted Gaussian kernel for topological data analysis, ICML 2016
  - Reininghaus et al.: A Stable Multi-Scale Kernel for Topological Machine Learning, CVPR 2015

In a similar vein, the persistence landscape formulation itself affords a kernel; it would be interesting to see how the proposed method compares to this (a sketch of that kernel follows below).
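Persistence landscapes live in L2, so their inner product defines a positive-definite kernel (Bubenik, JMLR 2015). The sketch below shows one way such a baseline could be set up; it reuses the persistence_landscape function from the sketch above, assumes a uniform grid, and approximates the L2 inner product with a Riemann sum. It is an illustration, not a tuned baseline from the paper or the review.

def landscape_kernel(diagram_x, diagram_y, t_grid, k_max=3):
    """Approximate k(X, Y) = sum_k integral lambda_k^X(t) * lambda_k^Y(t) dt."""
    lam_x = persistence_landscape(diagram_x, t_grid, k_max)  # (k_max, m)
    lam_y = persistence_landscape(diagram_y, t_grid, k_max)  # (k_max, m)
    dt = t_grid[1] - t_grid[0]  # uniform grid spacing assumed
    return float(np.sum(lam_x * lam_y) * dt)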